Goto

Collaborating Authors

 coordinator agent


Sentinel Agents for Secure and Trustworthy Agentic AI in Multi-Agent Systems

Gosmar, Diego, Dahl, Deborah A.

arXiv.org Artificial Intelligence

This paper proposes a novel architectural framework aimed at enhancing security and reliability in multi-agent systems (MAS). A central component of this framework is a network of Sentinel Agents, functioning as a distributed security layer that integrates techniques such as semantic analysis via large language models (LLMs), behavioral analytics, retrieval-augmented verification, and cross-agent anomaly detection. Such agents can potentially oversee inter-agent communications, identify potential threats, enforce privacy and access controls, and maintain comprehensive audit records. Complementary to the idea of Sentinel Agents is the use of a Coordinator Agent. The Coordinator Agent supervises policy implementation, and manages agent participation. In addition, the Coordinator also ingests alerts from Sentinel Agents. Based on these alerts, it can adapt policies, isolate or quarantine misbehaving agents, and contain threats to maintain the integrity of the MAS ecosystem. This dual-layered security approach, combining the continuous monitoring of Sentinel Agents with the governance functions of Coordinator Agents, supports dynamic and adaptive defense mechanisms against a range of threats, including prompt injection, collusive agent behavior, hallucinations generated by LLMs, privacy breaches, and coordinated multi-agent attacks. In addition to the architectural design, we present a simulation study where 162 synthetic attacks of different families (prompt injection, hallucination, and data exfiltration) were injected into a multi-agent conversational environment. The Sentinel Agents successfully detected the attack attempts, confirming the practical feasibility of the proposed monitoring approach. The framework also offers enhanced system observability, supports regulatory compliance, and enables policy evolution over time.


A semi-centralized multi-agent RL framework for efficient irrigation scheduling

Agyeman, Bernard T., Decard-Nelson, Benjamin, Liu, Jinfeng, Shah, Sirish L.

arXiv.org Artificial Intelligence

This paper proposes a Semi-Centralized Multi-Agent Reinforcement Learning (SCMARL) approach for irrigation scheduling in spatially variable agricultural fields, where management zones address spatial variability. The SCMARL framework is hierarchical in nature, with a centralized coordinator agent at the top level and decentralized local agents at the second level. The coordinator agent makes daily binary irrigation decisions based on field-wide conditions, which are communicated to the local agents. Local agents determine appropriate irrigation amounts for specific management zones using local conditions. The framework employs state augmentation approach to handle non-stationarity in the local agents' environments. An extensive evaluation on a large-scale field in Lethbridge, Canada, compares the SCMARL approach with a learning-based multi-agent model predictive control scheduling approach, highlighting its enhanced performance, resulting in water conservation and improved Irrigation Water Use Efficiency (IWUE). Notably, the proposed approach achieved a 4.0% savings in irrigation water while enhancing the IWUE by 6.3%.


Multi-Objective Optimization Using Adaptive Distributed Reinforcement Learning

Tan, Jing, Khalili, Ramin, Karl, Holger

arXiv.org Artificial Intelligence

The Intelligent Transportation System (ITS) environment is known to be dynamic and distributed, where participants (vehicle users, operators, etc.) have multiple, changing and possibly conflicting objectives. Although Reinforcement Learning (RL) algorithms are commonly applied to optimize ITS applications such as resource management and offloading, most RL algorithms focus on single objectives. In many situations, converting a multi-objective problem into a single-objective one is impossible, intractable or insufficient, making such RL algorithms inapplicable. We propose a multi-objective, multi-agent reinforcement learning (MARL) algorithm with high learning efficiency and low computational requirements, which automatically triggers adaptive few-shot learning in a dynamic, distributed and noisy environment with sparse and delayed reward. We test our algorithm in an ITS environment with edge cloud computing. Empirical results show that the algorithm is quick to adapt to new environments and performs better in all individual and system metrics compared to the state-of-the-art benchmark. Our algorithm also addresses various practical concerns with its modularized and asynchronous online training method. In addition to the cloud simulation, we test our algorithm on a single-board computer and show that it can make inference in 6 milliseconds.


Multi-agent Reinforcement Learning Improvement in a Dynamic Environment Using Knowledge Transfer

Mahdavimoghaddam, Mahnoosh, Nikanjam, Amin, Abdoos, Monireh

arXiv.org Artificial Intelligence

Cooperative multi-agent systems are being widely used in variety of areas. Interaction between agents would bring positive points, including reducing costs of operating, high scalability, and facilitating parallel processing. These systems pave the way for handling large-scale, unknown, and dynamic environments. However, learning in these environments has become a prominent challenge in different applications. These challenges include the effect of size of search space on learning time, inappropriate cooperation among agents, and the lack of proper coordination among agents' decisions. Moreover, reinforcement learning algorithms may suffer from long time of convergence in these problems. In this paper, a communication framework using knowledge transfer concepts is introduced to address such challenges in the herding problem with large state space. To handle the problems of convergence, knowledge transfer has been utilized that can significantly increase the efficiency of reinforcement learning algorithms. Coordination between the agents is carried out through a head agent in each group of agents and a coordinator agent respectively. The results demonstrate that this framework could indeed enhance the speed of learning and reduce convergence time.


Community Detection in Complex Networks Using Agents

Gunes, Ismail, Bingol, Haluk

arXiv.org Artificial Intelligence

Community structure identification has been one of the most popular research areas in recent years due to its applicability to the wide scale of disciplines. To detect communities in varied topics, there have been many algorithms proposed so far. However, most of them still have some drawbacks to be addressed. In this paper, we present an agent-based based community detection algorithm. The algorithm that is a stochastic one makes use of agents by forcing them to perform biased moves in a smart way. Using the information collected by the traverses of these agents in the network, the network structure is revealed. Also, the network modularity is used for determining the number of communities. Our algorithm removes the need for prior knowledge about the network such as number of the communities or any threshold values. Furthermore, the definite community structure is provided as a result instead of giving some structures requiring further processes. Besides, the computational and time costs are optimized because of using thread like working agents. The algorithm is tested on three network data of different types and sizes named Zachary karate club, college football and political books. For all three networks, the real network structures are identified in almost every run.